44 research outputs found

    Quantitative Safety: Linking Proof-Based Verification with Model Checking for Probabilistic Systems

    This paper presents a novel approach for augmenting proof-based verification with performance-style analysis of the kind employed in state-of-the-art model checking tools for probabilistic systems. Quantitative safety properties, usually specified as probabilistic system invariants and modeled in proof-based environments, are evaluated using bounded model checking techniques. Our specific contributions include the statement of a theorem that is central to model checking safety properties of proof-based systems, the establishment of a procedure, and its full implementation in a prototype system (YAGA) that readily transforms a probabilistic model specified in a proof-based environment into an equivalent verifiable PRISM model equipped with reward structures. The reward structures capture the exact interpretation of the probabilistic invariants and can reveal succinct information about the model during experimental investigations. Finally, we demonstrate the novelty of the technique on a probabilistic library case study.
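
    The abstract does not describe YAGA's internals, but the core idea, checking a quantitative safety invariant by bounded model checking, can be illustrated on a toy discrete-time Markov chain. The Python sketch below is illustrative only: the chain, the failure state and the probability bound are assumptions, not taken from the paper.

```python
# Minimal sketch (not YAGA or PRISM): bounded model checking of the quantitative
# safety property "the probability of reaching BAD within k steps is at most p_max"
# on a small, hand-written discrete-time Markov chain. The chain, the BAD state
# and the bound are illustrative assumptions.

# States: 0 = idle, 1 = busy, 2 = BAD (failure); BAD is absorbing.
P = {
    0: {0: 0.90, 1: 0.09, 2: 0.01},
    1: {0: 0.50, 1: 0.45, 2: 0.05},
    2: {2: 1.00},
}
BAD = 2

def prob_reach_bad(initial: int, k: int) -> float:
    """Probability mass in BAD after k steps (equals the reach probability, BAD being absorbing)."""
    dist = {s: 0.0 for s in P}
    dist[initial] = 1.0
    for _ in range(k):
        nxt = {s: 0.0 for s in P}
        for s, mass in dist.items():
            for t, pr in P[s].items():
                nxt[t] += mass * pr
        dist = nxt
    return dist[BAD]

p_max = 0.2  # the safety invariant: bound on the failure probability
for k in (1, 5, 10, 20):
    p = prob_reach_bad(0, k)
    print(f"k={k:2d}  P(reach BAD within k) = {p:.4f}  invariant {'holds' if p <= p_max else 'VIOLATED'}")
```

    In PRISM itself a bound of this kind would be expressed as a property such as P<=0.2 [ F<=k "bad" ], with reward structures attached to the model to extract further quantities during experiments.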

    Markovian Testing Equivalence and Exponentially Timed Internal Actions

    In the theory of testing for Markovian processes developed so far, exponentially timed internal actions are not admitted within processes. When present, these actions cannot be abstracted away, because their execution takes a nonzero amount of time and hence can be observed. On the other hand, they must be carefully taken into account, in order not to equate processes that are distinguishable from a timing viewpoint. In this paper, we recast the definition of Markovian testing equivalence in the framework of a Markovian process calculus including exponentially timed internal actions. Then, we show that the resulting behavioral equivalence is a congruence, has a sound and complete axiomatization, has a modal logic characterization, and can be decided in polynomial time.
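
    The central observation, that exponentially timed internal actions take a nonzero amount of time and are therefore observable, can be seen with a small simulation. The Python sketch below is an illustration under assumed rates, not part of the paper: two hypothetical processes that differ only in an exponentially timed tau-step have visibly different completion-time distributions, so they should not be equated.

```python
# Minimal sketch of why exponentially timed internal (tau) actions are observable:
# two processes that differ only in such an action show different completion-time
# distributions for the visible action 'a'. The rates 2.0 and 3.0 are assumptions.
import random

random.seed(1)

def with_internal_tau() -> float:
    """<tau, rate 2>.<a, rate 3>: internal exponentially timed step, then visible 'a'."""
    return random.expovariate(2.0) + random.expovariate(3.0)

def without_tau() -> float:
    """<a, rate 3>: the visible action 'a' alone."""
    return random.expovariate(3.0)

runs = 100_000
mean_with = sum(with_internal_tau() for _ in range(runs)) / runs
mean_without = sum(without_tau() for _ in range(runs)) / runs
print(f"mean time to observe 'a' with the tau-step:    {mean_with:.3f}  (expected 1/2 + 1/3)")
print(f"mean time to observe 'a' without the tau-step: {mean_without:.3f}  (expected 1/3)")
```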

    Verifying Real-Time Systems using Explicit-time Description Methods

    Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions, and tools based on them, have been presented. On the other hand, Explicit-Time Description Methods aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method that uses a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates the use of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experimental results show that our new method, with better modularity, is comparable to Lamport's method with respect to time and memory efficiency.
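
    As a rough illustration of the rendezvous idea (not the paper's DIVINE models, and with made-up process names and constants), the Python sketch below lets a Tick process advance time only by synchronizing with every system process at a barrier, so no shared global clock variable is needed.

```python
# Sketch of the rendezvous-based explicit-time idea: the Tick process advances time
# one unit at a time by meeting every system process at a barrier, so no global
# clock variable is shared. Process names, deadlines and the horizon are made up.
import threading

TICKS = 5                                   # time units to simulate
barrier = threading.Barrier(3)              # rendezvous point: Tick + two processes

def system_process(name: str, deadline: int) -> None:
    local_time = 0
    while local_time < TICKS:
        barrier.wait()                      # rendezvous with Tick: one time unit passes
        local_time += 1
        if local_time == deadline:
            print(f"{name}: deadline reached at local time {local_time}")

def tick_process() -> None:
    for t in range(1, TICKS + 1):
        barrier.wait()                      # release all system processes for this tick
        print(f"Tick: time advances to {t}")

threads = [threading.Thread(target=system_process, args=("P1", 2)),
           threading.Thread(target=system_process, args=("P2", 4)),
           threading.Thread(target=tick_process)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

    Only the passage of time is synchronized; each process's behaviour within a tick stays local, which loosely mirrors the modularity benefit the abstract describes.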

    Modelling Clock Synchronization in the Chess gMAC WSN Protocol

    We present a detailed timed automata model of the clock synchronization algorithm currently used in a wireless sensor network (WSN) developed by the Dutch company Chess. Using the Uppaal model checker, we establish that in certain cases a static, fully synchronized network may eventually become unsynchronized if the current algorithm is used, even in a setting with infinitesimal clock drifts.
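
    A back-of-the-envelope calculation already suggests why such a result is possible: any constant relative clock drift, however small, accumulates without bound, so a static schedule must eventually exceed any fixed guard interval unless it keeps being corrected. The numbers in the Python sketch below are illustrative assumptions, not gMAC parameters.

```python
# Back-of-the-envelope illustration (values are assumptions, not gMAC parameters):
# a constant relative clock drift accumulates linearly, so two initially
# synchronized nodes eventually differ by more than any fixed guard interval.
DRIFT = 1e-5        # relative drift between the two nodes' clocks, per slot
GUARD = 0.05        # tolerated offset, as a fraction of a slot
SLOT = 1.0          # nominal slot length (arbitrary time units)

offset, slots = 0.0, 0
while offset <= GUARD:
    slots += 1
    offset += DRIFT * SLOT
print(f"without correction the nodes exceed the guard interval after {slots} slots "
      f"(offset = {offset:.5f} slots)")
```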

    Metagenomic Nanopore sequencing of influenza virus direct from clinical respiratory samples

    Influenza is a major global public health threat as a result of its highly pathogenic variants, large zoonotic reservoir, and pandemic potential. Metagenomic viral sequencing offers the potential for a diagnostic test for influenza virus which also provides insights on transmission, evolution, and drug resistance and simultaneously detects other viruses. We therefore set out to apply the Oxford Nanopore Technologies sequencing method to metagenomic sequencing of respiratory samples. We generated influenza virus reads down to a limit of detection of 10² to 10³ genome copies/ml in pooled samples, observing a strong relationship between the viral titer and the proportion of influenza virus reads (P = 4.7 × 10⁻⁵). Applying our methods to clinical throat swabs, we generated influenza virus reads for 27/27 samples with mid-to-high viral titers (low cycle threshold [CT] values), obtaining >99% complete sequences for all eight gene segments. We also detected a human coronavirus coinfection in one clinical sample. While further optimization is required to improve sensitivity, this approach shows promise for the Nanopore platform to be used in the diagnosis and genetic analysis of influenza virus and other respiratory viruses.

    Reconceptualizing CSR in the media industry as relational accountability

    In this paper, we reconceptualize CSR in the media industries by combining empirical data with theoretical perspectives emerging from the communication studies and business ethics literature. We develop a new conception of what corporate responsibility in media organizations may mean in real terms by bringing Bardoel and d’Haenens’ (European Journal of Communication 19: 165–194, 2004) discussion of the different dimensions of media accountability into conversation with the empirical results from three international focus group studies, conducted in France, the USA and South Africa. To enable a critical perspective on our findings, we perform a philosophical analysis of their implications for professional, public, market, and political accountability in the media, drawing on the insights of Paul Virilio. We come to the conclusion that though some serious challenges to media accountability exist, the battle for responsible media industries is not lost. In fact, the speed characterizing the contemporary media environment may hold some promise for fostering the kind of relational accountability that could underpin a new understanding of CSR in the media.

    Prognostic model to predict postoperative acute kidney injury in patients undergoing major gastrointestinal surgery based on a national prospective observational cohort study.

    Background: Acute illness, existing co-morbidities and the surgical stress response can all contribute to postoperative acute kidney injury (AKI) in patients undergoing major gastrointestinal surgery. The aim of this study was prospectively to develop a pragmatic prognostic model to stratify patients according to risk of developing AKI after major gastrointestinal surgery. Methods: This prospective multicentre cohort study included consecutive adults undergoing elective or emergency gastrointestinal resection, liver resection or stoma reversal in 2-week blocks over a continuous 3-month period. The primary outcome was the rate of AKI within 7 days of surgery. Bootstrap stability was used to select clinically plausible risk factors for inclusion in the model. Internal model validation was carried out by bootstrap validation. Results: A total of 4544 patients were included across 173 centres in the UK and Ireland. The overall rate of AKI was 14·2 per cent (646 of 4544) and the 30-day mortality rate was 1·8 per cent (84 of 4544). Stage 1 AKI was significantly associated with 30-day mortality (unadjusted odds ratio 7·61, 95 per cent c.i. 4·49 to 12·90; P < 0·001), with increasing odds of death with each AKI stage. Six variables were selected for inclusion in the prognostic model: age, sex, ASA grade, preoperative estimated glomerular filtration rate, planned open surgery, and preoperative use of either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker. Internal validation demonstrated good model discrimination (c-statistic 0·65). Discussion: Following major gastrointestinal surgery, AKI occurred in one in seven patients. This preoperative prognostic model identified patients at high risk of postoperative AKI. Validation in an independent data set is required to ensure generalizability.
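
    As a hedged illustration of the modelling approach (not the study's code or data), the Python sketch below fits a logistic regression on six synthetic stand-ins for the predictors named in the abstract and reports an apparent and a simple bootstrap-validated c-statistic; the variables, effect sizes and validation loop are assumptions for illustration only.

```python
# Sketch with synthetic data (not the study's cohort): fit a logistic prognostic
# model on six stand-ins for the predictors named in the abstract and estimate
# the c-statistic, with a simple bootstrap loop for internal validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(65, 12, n),      # age (years)
    rng.integers(0, 2, n),      # sex
    rng.integers(1, 5, n),      # ASA grade
    rng.normal(75, 20, n),      # preoperative eGFR
    rng.integers(0, 2, n),      # planned open surgery
    rng.integers(0, 2, n),      # preoperative ACE inhibitor / ARB use
])
# Synthetic outcome, roughly a 15 per cent event rate, loosely driven by the predictors.
logit = -2.0 + 0.02 * (X[:, 0] - 65) - 0.02 * (X[:, 3] - 75) + 0.5 * X[:, 4]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
print("apparent c-statistic:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))

# Simplified bootstrap validation: refit on resamples, score each refit on the full data.
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("bootstrap-validated c-statistic:", round(float(np.mean(aucs)), 3))
```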

    Identifying Undervalued Players in Fantasy Football

    In this paper we present a model to predict player performance in fantasy football. In particular, identifying high-performance players can prove to be a difficult problem, as there are on occasion players capable of high performance whose past metrics give no indication of this capacity. These 'sleepers' are often undervalued, and the acquisition of such players can have a notable impact on a fantasy football team's overall performance. We constructed a regression model that accounts for players' past performance and athletic metrics to predict their future performance. The model we built performs favorably in predicting athlete performance relative to other models, though this performance is heavily reliant upon the accuracy of estimates of athletes' workloads.
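
    A minimal sketch of the kind of regression described, using synthetic data and made-up features (past fantasy points, a combine speed metric and a workload estimate), is given below; it is an illustration under assumptions, not the paper's model.

```python
# Sketch on synthetic data of the kind of regression described; feature names,
# coefficients and noise levels are assumptions, not the paper's variables or model.
import numpy as np

rng = np.random.default_rng(42)
n = 500
past_points = rng.normal(120, 40, n)       # last season's fantasy points
combine_speed = rng.normal(4.6, 0.2, n)    # e.g. 40-yard dash time (seconds)
workload = rng.normal(180, 60, n)          # estimated touches / targets

# A made-up "true" relationship, used only to generate data to fit against.
future_points = (0.5 * past_points + 0.4 * workload
                 - 60 * (combine_speed - 4.6) + rng.normal(0, 20, n))

# Ordinary least-squares fit of future points on an intercept and the three features.
X = np.column_stack([np.ones(n), past_points, combine_speed, workload])
coef, *_ = np.linalg.lstsq(X, future_points, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((future_points - pred) ** 2)))
print("fitted coefficients (intercept, past points, speed, workload):", np.round(coef, 2))
print("RMSE on the synthetic data:", round(rmse, 1))
```

    As the abstract notes, predictions of this kind are only as reliable as the workload estimates that feed them.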

    Proceedings First Workshop on Quantitative Formal Methods: Theory and Applications

    This volume contains the final versions of all contributions accepted for presentation at the 1st workshop on Quantitative Formal Methods: Theory and Applications, which was held in Eindhoven on 3 November 2009 as part of the International Symposium on Formal Methods 2009.